Brain activity


Why some people cannot move on from the death of a loved one

New Scientist

Prolonged grief disorder affects around 1 in 20 people, and we're starting to understand the neuroscience behind it.

For most people, the intense sting of grief eases with time. For some, however, persistent and painful grief remains, developing into prolonged grief disorder. A new review of the condition, which affects around 5 per cent of bereaved people, sheds light on how it develops. This could help doctors predict which recently bereaved people will benefit from extra support. The decision to include prolonged grief disorder (PGD) in the American Psychiatric Association's diagnostic manual in 2022 sparked intense debate over whether it was pathologising a normal human response to loss and imposing an arbitrary timeline on what constitutes "normal" grief.


Study of Buddhist Monks Finds Meditation Alters Brain Activity

WIRED

New research reinforces that it's a mind-altering, dynamic state that promotes focus, learning, and well-being.

If you've ever considered practicing meditation, you might believe you should relax, breathe, and empty your mind of distracting thoughts. Novices tend to think of meditation as the brain at rest, but a new international study concludes that this ancient practice is quite the opposite: meditation is a state of heightened cerebral activity that profoundly alters brain dynamics. Researchers from the University of Montreal and Italy's National Research Council recruited 12 monks of the Thai Forest Tradition at Santacittārāma, a Buddhist monastery outside Rome. In a laboratory in Chieti-Pescara, scientists analyzed the brain activity of these meditation practitioners using magnetoencephalography (MEG), a technology capable of recording the brain's electrical signals with great precision.





Would YOU sit on it? Scientists develop a futuristic chair that puts you in an 'altered state of mind' within minutes

Daily Mail - Science & tech

Would you be brave enough to sit on a chair that can send you into an 'altered state of mind' within minutes?
That is the wild promise of the Aiora chair, a futuristic seat designed by scientists from the University of Essex and British furniture company DavidHugh LTD.


Exploring the trade-off between deep-learning and explainable models for brain-machine interfaces

Neural Information Processing Systems

People with brain or spinal cord-related paralysis often need to rely on others for basic tasks, limiting their independence. A potential solution is brain-machine interfaces (BMIs), which could allow them to voluntarily control external devices (e.g., a robotic arm) by decoding brain activity into movement commands. In the past decade, deep-learning decoders have achieved state-of-the-art results in most BMI applications, ranging from speech production to finger control. However, the 'black-box' nature of deep-learning decoders could lead to unexpected behaviors, resulting in major safety concerns in real-world physical control scenarios. In these applications, explainable but lower-performing decoders, such as the Kalman filter (KF), remain the norm. In this study, we designed a BMI decoder based on KalmanNet, an extension of the KF that augments its operation with recurrent neural networks to compute the Kalman gain.
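To make the KalmanNet idea concrete, here is a minimal NumPy sketch of a filter whose Kalman gain comes from a pluggable "gain network" instead of the covariance recursion. The dimensions, the random-walk dynamics, and the stub recurrence are hypothetical illustrations, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_state, n_obs = 4, 8          # e.g. 2D position+velocity state, 8 neural channels
A = np.eye(n_state)            # assumed random-walk state transition
H = rng.standard_normal((n_obs, n_state)) * 0.3   # assumed observation (tuning) model

def gain_network(innovation, hidden):
    """Stub for the learned gain module: maps the innovation and a hidden
    state to a Kalman gain. KalmanNet trains a recurrent net for this
    end-to-end; here a toy recurrence and a fixed surrogate gain stand in."""
    hidden = 0.9 * hidden + 0.1 * innovation
    K = 0.1 * H.T
    return K, hidden

def decode(observations):
    """Classic predict/update loop, with the gain supplied by the network."""
    x = np.zeros(n_state)
    hidden = np.zeros(n_obs)
    states = []
    for y in observations:
        x_pred = A @ x                    # predict
        innov = y - H @ x_pred            # innovation
        K, hidden = gain_network(innov, hidden)
        x = x_pred + K @ innov            # update with the learned gain
        states.append(x)
    return np.array(states)

# Simulate a short neural recording driven by a drifting latent state.
true_x = np.cumsum(rng.standard_normal((50, n_state)) * 0.1, axis=0)
obs = true_x @ H.T + rng.standard_normal((50, n_obs)) * 0.05
est = decode(obs)
print(est.shape)  # (50, 4)
```

The appeal of this hybrid is that the predict/update structure stays inspectable (each estimate is a prediction plus a gain-weighted innovation) while only the gain computation is learned.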


EEG2Video: Towards Decoding Dynamic Visual Perception from EEG Signals

Neural Information Processing Systems

Our visual experience in daily life is dominated by dynamic change. Decoding such dynamic information from brain activity can enhance our understanding of the brain's visual processing system. However, previous studies have predominantly focused on reconstructing static visual stimuli. In this paper, we explore decoding dynamic visual perception from electroencephalography (EEG), a neuroimaging technique able to record brain activity with high temporal resolution (1000 Hz), capturing rapid changes in the brain. Our contributions are threefold: Firstly, we collect a large dataset of signals recorded from 20 subjects while they watched 1400 dynamic video clips spanning 40 concepts.
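As a small illustration of the preprocessing such a dataset implies, the sketch below cuts continuous 1000 Hz EEG into clip-aligned epochs. The channel count, clip length, and onset times are hypothetical; only the 1000 Hz sampling rate comes from the abstract.

```python
import numpy as np

fs = 1000                      # sampling rate in Hz (from the abstract)
n_channels = 62                # hypothetical channel count
clip_seconds = 2               # hypothetical clip length
eeg = np.random.randn(n_channels, 60 * fs)   # 60 s of synthetic continuous EEG
onsets = np.arange(0, 50, 2.5)               # hypothetical clip onsets, in seconds

def epoch(eeg, onsets, fs, seconds):
    """Cut one (channels x samples) window per clip onset."""
    win = int(seconds * fs)
    return np.stack([eeg[:, int(t * fs): int(t * fs) + win] for t in onsets])

epochs = epoch(eeg, onsets, fs, clip_seconds)
print(epochs.shape)  # (20, 62, 2000)
```

Each resulting epoch is one trial for a decoder; at 1000 Hz, even a 2-second clip yields 2000 samples per channel, which is what makes rapid perceptual dynamics recoverable.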


Inducing brain-relevant bias in natural language processing models

Neural Information Processing Systems

Progress in natural language processing (NLP) models that estimate representations of word sequences has recently been leveraged to improve the understanding of language processing in the brain. However, these models have not been specifically designed to capture the way the brain represents language meaning. We hypothesize that fine-tuning these models to predict recordings of brain activity from people reading text will lead to representations that encode more brain-activity-relevant language information. We demonstrate that a version of BERT, a recently introduced and powerful language model, can improve the prediction of brain activity after fine-tuning. We show that the relationship between language and brain activity learned by BERT during this fine-tuning transfers across multiple participants. We also show that, for some participants, the fine-tuned representations learned from both magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) are better for predicting fMRI than representations learned from fMRI alone, indicating that the learned representations capture brain-activity-relevant information that is not simply an artifact of the recording modality. While changes to the language representations help the model predict brain activity, they do not harm its ability to perform downstream NLP tasks. Our findings have implications for research on language understanding in the brain.
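The "prediction of brain activity" step is commonly evaluated with an encoding model: regress voxel responses on language-model features, then score held-out predictions per voxel. Below is a hedged sketch with synthetic data standing in for BERT features and fMRI responses; all dimensions are hypothetical, and closed-form ridge regression is one standard choice, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 200, 50, 64, 100   # hypothetical sizes

# Synthetic stand-ins: features X (would be BERT embeddings) and voxel
# responses Y generated from a hidden linear map plus noise.
W_true = rng.standard_normal((n_feat, n_vox)) * 0.05
X_train = rng.standard_normal((n_train, n_feat))
Y_train = X_train @ W_true + rng.standard_normal((n_train, n_vox)) * 0.1
X_test = rng.standard_normal((n_test, n_feat))
Y_test = X_test @ W_true + rng.standard_normal((n_test, n_vox)) * 0.1

def ridge(X, Y, lam=10.0):
    """Closed-form ridge: W = (X^T X + lam I)^{-1} X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def voxel_corr(a, b):
    """Per-voxel Pearson correlation between predicted and actual responses."""
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

W = ridge(X_train, Y_train)
r = voxel_corr(X_test @ W, Y_test)
print(r.shape)  # (100,)
```

The paper's claim maps onto this setup as: features from fine-tuned BERT should yield higher held-out per-voxel correlations than features from the off-the-shelf model.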


Reconstructing the Mind's Eye: fMRI-to-Image with Contrastive Learning and Diffusion Priors

Neural Information Processing Systems

We present MindEye, a novel fMRI-to-image approach to retrieve and reconstruct viewed images from brain activity. Our model comprises two parallel submodules specialized for retrieval (using contrastive learning) and reconstruction (using a diffusion prior). MindEye can map fMRI brain activity to any high-dimensional multimodal latent space, such as CLIP image space, enabling image reconstruction with generative models that accept embeddings from that latent space. We comprehensively compare our approach with existing methods, using both qualitative side-by-side comparisons and quantitative evaluations, and show that MindEye achieves state-of-the-art performance in both reconstruction and retrieval tasks. In particular, MindEye can retrieve the exact original image even among highly similar candidates, indicating that its brain embeddings retain fine-grained image-specific information. This allows us to accurately retrieve images even from large-scale databases like LAION-5B. We demonstrate through ablations that MindEye's performance improvements over previous methods result from specialized submodules for retrieval and reconstruction, improved training techniques, and training models with orders of magnitude more parameters. Furthermore, we show that MindEye can better preserve low-level image features in the reconstructions by using img2img, with outputs from a separate autoencoder. All code is available on GitHub.
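The retrieval submodule can be illustrated at inference time: map a brain recording into the shared embedding space, then rank a gallery of image embeddings by cosine similarity. In this sketch the brain embedding is simulated as a noisy copy of one gallery item (since there is no trained model here), and the gallery size, noise level, and 512-dimensional space are hypothetical stand-ins for CLIP embeddings and a trained MindEye mapping.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, dim = 1000, 512

# Hypothetical gallery of unit-norm image embeddings (CLIP-like).
gallery = rng.standard_normal((n_images, dim))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# Simulated "brain embedding": a noisy copy of gallery item 42,
# standing in for the output of a trained fMRI-to-embedding model.
target = 42
brain_embedding = gallery[target] + rng.standard_normal(dim) * 0.05
brain_embedding /= np.linalg.norm(brain_embedding)

def retrieve(query, gallery, k=5):
    """Return indices of the top-k gallery items by cosine similarity
    (rows and query are unit vectors, so the dot product is the cosine)."""
    sims = gallery @ query
    return np.argsort(-sims)[:k]

top5 = retrieve(brain_embedding, gallery)
print(int(top5[0]))
```

This nearest-neighbor step is why fine-grained embeddings matter: retrieval succeeds only if the brain embedding lands closer to the exact viewed image than to visually similar distractors, which is what the contrastive training objective encourages.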